Statistical learning theory

Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis.〔Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012). ''Foundations of Machine Learning''. The MIT Press. ISBN 9780262018258.〕
Statistical learning theory deals with the problem of finding a predictive function based on data. It has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
==Introduction==
The goal of learning is prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood.〔Tomaso Poggio, Lorenzo Rosasco, et al. (2012). ''Statistical Learning Theory and Applications'', Class 1.〕
Supervised learning involves learning from a training set of data. Every point in the training set is an input-output pair, where the input maps to an output. The learning problem consists of inferring the function that maps between the input and the output, such that the learned function can be used to predict the output from future input.
Depending on the type of output, supervised learning problems are either problems of regression or problems of classification. If the output takes a continuous range of values, it is a regression problem.
Using Ohm's law as an example, a regression could be performed with voltage as input and current as output. The regression would find the functional relationship between voltage and current to be <math>\frac{1}{R}</math>, such that
:<math>I = \frac{1}{R} V</math>
where <math>R</math> is the resistance.

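As an illustrative sketch (not part of the original article), the least-squares fit below recovers an estimate of <math>\frac{1}{R}</math> from simulated noisy voltage-current measurements; the resistance value, noise level, and NumPy-based implementation are assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
true_R = 5.0                                    # assumed "true" resistance in ohms
voltage = np.linspace(0.0, 10.0, 50)            # inputs: applied voltages
# Outputs: currents following Ohm's law, with small measurement noise added.
current = voltage / true_R + rng.normal(0.0, 0.01, voltage.shape)

# Fit I = w * V with no intercept; the least-squares slope w estimates 1/R.
w, *_ = np.linalg.lstsq(voltage.reshape(-1, 1), current, rcond=None)
print(f"estimated 1/R = {w[0]:.4f}, true 1/R = {1.0 / true_R:.4f}")
</syntaxhighlight>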
Classification problems are those for which the output will be an element from a discrete set of labels. Classification is very common in machine learning applications. In facial recognition, for instance, a picture of a person's face would be the input, and the output label would be that person's name. The input would be represented by a large multidimensional vector whose elements represent pixels in the picture.
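As a sketch of the idea (not from the article), the toy classifier below represents each "image" as a small pixel vector and predicts a name label with a one-nearest-neighbour rule; the vectors, names, and the choice of nearest-neighbour classification are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Training set: each input is a flattened vector of pixel intensities,
# each output is a discrete label (here, a person's name).
train_inputs = np.array([[0.9, 0.8, 0.1, 0.2],
                         [0.2, 0.1, 0.9, 0.8]])
train_labels = ["Alice", "Bob"]

def classify(pixels):
    """Return the label of the closest training example (Euclidean distance)."""
    distances = np.linalg.norm(train_inputs - pixels, axis=1)
    return train_labels[int(np.argmin(distances))]

print(classify(np.array([0.85, 0.75, 0.15, 0.25])))  # prints "Alice"
</syntaxhighlight>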
After learning a function based on the training set data, that function is validated on a test set of data that did not appear in the training set.
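A minimal sketch of this train/test protocol, with invented data and an assumed 80/20 split (NumPy-based, not taken from the article), might look as follows:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)                 # inputs
y = 0.2 * x + rng.normal(0.0, 0.05, x.shape)    # outputs from an assumed relationship

# Randomly hold out 20% of the points as a test set.
idx = rng.permutation(len(x))
train_idx, test_idx = idx[:80], idx[80:]

# Learn the slope from the training portion only.
w, *_ = np.linalg.lstsq(x[train_idx].reshape(-1, 1), y[train_idx], rcond=None)

# Validate on data the learner never saw during training.
test_mse = np.mean((x[test_idx] * w[0] - y[test_idx]) ** 2)
print(f"learned slope = {w[0]:.3f}, test MSE = {test_mse:.5f}")
</syntaxhighlight>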
